
    Smartphone applications for melanoma detection by community, patient and generalist clinician users: a review.

    Smartphone health applications ('apps') are widely available but experts remain cautious about their utility and safety. We reviewed currently available apps for the detection of melanoma (July 2014), aimed at general community, patient and generalist clinician users. A proforma was used to extract and assess each app that met the inclusion criteria, and we undertook content analysis to evaluate their content and the evidence applied in their development. Thirty-nine apps were identified with the majority available only for Apple users. Over half (n = 22) provided information or education about melanoma, ultraviolet radiation exposure prevention advice, and skin self-examination strategies, mainly using the ABCDE (A, Asymmetry; B, Border; C, Colour; D, Diameter; E, Evolving) method. Half (n = 19) helped users take and store images of their skin lesions either for review by a dermatologist or for self-monitoring to identify change, an important predictor of melanoma; a similar number (n = 18) used reminders to help users monitor their skin lesions. A few (n = 9) offered expert review of images. Four apps provided a risk assessment to patients about the probability that a lesion was malignant or benign, and one app calculated users' future risk of melanoma. None of the apps appeared to have been validated for diagnostic accuracy or utility using established research methods. Smartphone apps for detecting melanoma by nonspecialist users have a range of functions including information, education, classification, risk assessment and monitoring change. Despite their potential usefulness, and while clinicians may choose to use apps that provide information to educate their patients, apps for melanoma detection require further validation of their utility and safety. This is the final published version. It first appeared at http://dx.doi.org/10.1111/bjd.13665
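The ABCDE self-examination checklist mentioned above can be sketched as a simple rule-based score. This is a hypothetical illustration only: the reviewed apps do not publish their algorithms, and the 6 mm diameter cut-off is the commonly quoted clinical mnemonic, not a value taken from the review.

```python
# Hypothetical illustration of the ABCDE self-examination checklist.
# Thresholds (e.g. the 6 mm diameter rule) follow the common clinical
# mnemonic; none of this reproduces any reviewed app's actual algorithm.

def abcde_flags(asymmetry, irregular_border, multiple_colours,
                diameter_mm, evolving):
    """Return the list of ABCDE criteria a lesion meets."""
    flags = []
    if asymmetry:
        flags.append("A: asymmetry")
    if irregular_border:
        flags.append("B: border irregularity")
    if multiple_colours:
        flags.append("C: colour variation")
    if diameter_mm > 6.0:          # the commonly quoted 6 mm cut-off
        flags.append("D: diameter > 6 mm")
    if evolving:
        flags.append("E: evolving")
    return flags

# A lesion meeting several criteria warrants professional review.
print(len(abcde_flags(True, False, True, 7.2, True)))  # 4 criteria met
```

A checklist like this only flags lesions for review; as the abstract notes, no app implementing such logic had been validated for diagnostic accuracy.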

    Automated recovery of 3D models of plant shoots from multiple colour images

    Increased adoption of the systems approach to biological research has focussed attention on the use of quantitative models of biological objects. This includes a need for realistic 3D representations of plant shoots for quantification and modelling. Previous limitations in single or multi-view stereo algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present a fully automatic approach to image-based 3D plant reconstruction that can be achieved using a single low-cost camera. The reconstructed plants are represented as a series of small planar sections that together model the more complex architecture of the leaf surfaces. The boundary of each leaf patch is refined using the level set method, optimising the model based on image information, curvature constraints and the position of neighbouring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed, and as such is applicable to a wide variety of plant species and topologies, and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on datasets of wheat and rice plants, as well as a novel virtual dataset that allows us to compute quantitative measures of reconstruction accuracy. The output is a 3D mesh structure that is suitable for modelling applications, in a format that can be imported into the majority of 3D graphics and software packages.
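The core of the planar-section representation described above is fitting a best-fit plane to a patch of surface points, which can be done by PCA/SVD. A minimal sketch follows; the paper's full pipeline additionally refines patch boundaries with level sets, which this does not attempt, and the synthetic point cloud is an invented example.

```python
import numpy as np

# A minimal sketch of the planar-patch idea: fit a least-squares plane to
# a cloud of surface points via SVD. The full method also refines patch
# boundaries with level sets; only the plane-fitting core is shown here.

def fit_plane(points):
    """Least-squares plane through 3D points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                 # direction of least variance
    return centroid, normal

rng = np.random.default_rng(0)
# Synthetic patch: points scattered near the plane z = 0 with small noise
pts = rng.uniform(-1, 1, size=(200, 3))
pts[:, 2] = 0.01 * rng.standard_normal(200)
c, n = fit_plane(pts)
print(abs(n[2]))  # close to 1: the recovered normal is near (0, 0, 1)
```

Fitting many such local planes, then stitching and refining their boundaries, yields the mesh-of-patches output the abstract describes.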

    A systematic assessment of the association between frequently-prescribed medicines and the risk of common cancers: a series of nested case-control studies

    Funding: This work was supported by Cancer Research UK (reference C37316/A25535). Acknowledgements: We wish to thank PCCIUR, University of Aberdeen, especially Artur Wozniak, for extracting the data and performing case-control matching. Peer reviewed. Publisher PDF.

    Demonstration of quantum-enhanced rangefinding robust against classical jamming

    In this paper we demonstrate operation of a quantum-enhanced lidar based on a continuously pumped photon pair source combined with simple detection in regimes with over 5 orders of magnitude separation between signal and background levels and target reflectivity down to -52 dB. We characterise the performance of our detector using a log-likelihood analysis framework, and crucially demonstrate the robustness of our system to fast and slow classical jamming, introducing a new protocol to implement dynamic background tracking to eliminate the impact of slow background changes whilst maintaining immunity to high frequency fluctuations. Finally, we extend this system to the regime of rangefinding in the presence of classical jamming to locate a target with an 11 cm spatial resolution limited only by the detector jitter. These results demonstrate the advantage of exploiting quantum correlations for lidar applications, providing a clear route to implementation of this system in real-world scenarios.
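The log-likelihood analysis mentioned above can be illustrated schematically with Poisson photon counting: compare the likelihood of the observed counts under a target-present hypothesis (signal plus background rate) against target-absent (background only). The rates and counts below are invented for illustration and are not the paper's measured values or its exact framework.

```python
import math

# Schematic log-likelihood target detection with Poisson counts.
# All rates below are illustrative assumptions, not measured values.

def log_likelihood_ratio(counts, rate_bg, rate_sig):
    """Sum of per-bin Poisson log-likelihood ratios:
    target-present (rate_bg + rate_sig) vs target-absent (rate_bg)."""
    llr = 0.0
    lam1, lam0 = rate_bg + rate_sig, rate_bg
    for k in counts:
        # log P(k | lam1) - log P(k | lam0); the k! terms cancel
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
    return llr

# Coincidence counts in 5 timing bins when a weak target is present
counts = [3, 4, 2, 5, 3]
print(log_likelihood_ratio(counts, rate_bg=1.0, rate_sig=2.0) > 0)  # True
```

A positive ratio favours target-present; thresholding it trades false alarms against missed detections. Tracking the background rate over time, as the paper's dynamic background tracking protocol does, keeps the target-absent model accurate under slow jamming.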

    Approaches to three-dimensional reconstruction of plant shoot topology and geometry

    There are currently 805 million people classified as chronically undernourished, and yet the World's population is still increasing. At the same time, global warming is causing more frequent and severe flooding and drought, thus destroying crops and reducing the amount of land available for agriculture. Recent studies show that without crop climate adaptation, crop productivity will deteriorate. With access to 3D models of real plants it is possible to acquire detailed morphological and gross developmental data that can be used to study their ecophysiology, leading to an increase in crop yield and stability across hostile and changing environments. Here we review approaches to the reconstruction of 3D models of plant shoots from image data, consider current applications in plant and crop science, and identify remaining challenges. We conclude that although phenotyping is receiving an increasing amount of attention – particularly from computer vision researchers – and numerous vision approaches have been proposed, it still remains a highly interactive process. An automated system capable of producing 3D models of plants would significantly aid phenotyping practice, increasing accuracy and repeatability of measurements.

    A patch-based approach to 3D plant shoot phenotyping

    The emerging discipline of plant phenomics aims to measure key plant characteristics, or traits, though as yet the set of plant traits that should be measured by automated systems is not well defined. Methods capable of recovering generic representations of the 3D structure of plant shoots from images would provide a key technology underpinning quantification of a wide range of current and future physiological and morphological traits. We present a fully automatic approach to image-based 3D plant reconstruction which represents plants as a series of small planar sections that together model the complex architecture of leaf surfaces. The initial boundary of each leaf patch is refined using a level set method, optimising the model based on image information, curvature constraints and the position of neighbouring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed. As such it is applicable to a wide variety of plant species and topologies, and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on real images of wheat and rice plants, an artificial plant with challenging architecture, as well as a novel virtual dataset that allows us to compute distance measures of reconstruction accuracy. We also illustrate the method's potential to support the identification of individual leaves, and so the phenotyping of plant shoots, using a spectral clustering approach.
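The spectral clustering step mentioned at the end of the abstract can be sketched on a toy patch-adjacency graph: patches on the same leaf are strongly connected, so the sign of the graph Laplacian's Fiedler vector separates two leaves. The affinity values below are invented for illustration; the paper derives them from patch geometry.

```python
import numpy as np

# Toy sketch of spectral clustering on a patch-adjacency graph. Patches
# 0-2 lie on one leaf and 3-5 on another, joined by one weak link, so the
# Fiedler vector of the graph Laplacian splits the graph at the weak cut.

def fiedler_split(affinity):
    """Partition graph nodes by the sign of the Laplacian's Fiedler vector."""
    degree = np.diag(affinity.sum(axis=1))
    laplacian = degree - affinity
    _, eigvecs = np.linalg.eigh(laplacian)
    fiedler = eigvecs[:, 1]          # eigenvector of 2nd-smallest eigenvalue
    return fiedler > 0

A = np.zeros((6, 6))
for i, j, w in [(0, 1, 1.0), (1, 2, 1.0), (3, 4, 1.0), (4, 5, 1.0), (2, 3, 0.05)]:
    A[i, j] = A[j, i] = w            # invented affinities for illustration
labels = fiedler_split(A)
# Patches 0-2 land in one cluster, 3-5 in the other
print(labels[0] == labels[1] == labels[2], labels[0] != labels[3])
```

With more leaves, one would use k eigenvectors and a k-means step rather than a single sign split; this two-cluster case shows the principle.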

    Image-based 3D canopy reconstruction to determine potential productivity in complex multi-species crop systems

    Background and Aims: Intercropping systems contain two or more species simultaneously in close proximity. Due to contrasting features of the component crops, quantification of the light environment and photosynthetic productivity is extremely difficult. However, it is an essential component of productivity. Here, a low-tech but high resolution method is presented that can be applied to single and multi-species cropping systems, to facilitate characterisation of the light environment. Different row layouts of an intercrop consisting of Bambara groundnut (Vigna subterranea (L.) Verdc.) and Proso millet (Panicum miliaceum) have been used as an example and the new opportunities presented by this approach have been analysed. Methods: Three-dimensional plant reconstruction, based on stereocameras, combined with ray-tracing was implemented to explore the light environment within the Bambara groundnut-Proso millet intercropping system and associated monocrops. Gas exchange data were used to predict the total carbon gain of each component crop. Key Results: The shading influence of the tall Proso millet on the shorter Bambara groundnut results in a reduction in total canopy light interception and carbon gain. However, the increased leaf area index (LAI) of Proso millet, higher photosynthetic potential due to the C4 pathway and sub-optimal photosynthetic acclimation of Bambara groundnut to shade means that increasing the number of rows of millet will lead to greater light interception and carbon gain per unit ground area, despite Bambara groundnut intercepting more light per unit leaf area. Conclusions: Three-dimensional reconstruction combined with ray tracing provides a novel, accurate method of exploring the light environment within an intercrop that does not require difficult measurements of light interception and data-intensive manual reconstruction, especially for systems with such inherent spatial complexity. It provides new opportunities for calculating potential productivity within multi-species cropping systems; enables the quantification of dynamic physiological differences between crops grown as monocultures and those within intercrops; and enables the prediction of new productive combinations of previously untested crops.
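The light-to-carbon chain of reasoning above can be illustrated with a deliberately simplified stand-in for the paper's ray tracing: Beer-Lambert attenuation through the taller millet canopy, then a rectangular-hyperbola light-response curve for assimilation. All parameter values below (LAI, extinction coefficient, light-response constants) are illustrative assumptions, not the study's measured values.

```python
import math

# Back-of-envelope stand-in for the 3D-reconstruction + ray-tracing
# pipeline: Beer-Lambert interception for the taller canopy, then a
# rectangular-hyperbola light-response curve. Parameters are invented.

def intercepted_fraction(lai, k=0.5):
    """Beer-Lambert: fraction of incident light intercepted by a canopy layer."""
    return 1.0 - math.exp(-k * lai)

def assimilation(irradiance, a_max=25.0, quantum_yield=0.05):
    """Rectangular-hyperbola light-response curve (umol CO2 m-2 s-1)."""
    return (a_max * quantum_yield * irradiance) / (a_max + quantum_yield * irradiance)

ppfd = 1500.0                                  # incident light, umol m-2 s-1
millet_frac = intercepted_fraction(lai=3.0)    # taller C4 crop shades first
below = ppfd * (1.0 - millet_frac)             # light reaching the groundnut
print(round(millet_frac, 2), round(below))
```

This one-dimensional sketch cannot capture row layout, which is exactly why the paper resorts to full 3D reconstruction and ray tracing: interception in an intercrop depends on where the rows are, not just on layer-average LAI.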

    Three-dimensional reconstruction of plant shoots from multiple images using an active vision system

    The reconstruction of 3D models of plant shoots is a challenging problem central to the emerging discipline of plant phenomics – the quantitative measurement of plant structure and function. Current approaches are, however, often limited by the use of static cameras. We propose an automated active phenotyping cell to reconstruct plant shoots from multiple images, using a turntable capable of rotating through 360 degrees and a camera mounted on a robot arm. To overcome the problem of static camera positions, we develop an algorithm capable of analysing the environment and determining viewpoints from which to capture initial images suitable for use by a structure-from-motion technique.

    Recovering Wind-Induced Plant Motion in Dense Field Environments via Deep Learning and Multiple Object Tracking

    Understanding the relationships between local environmental conditions and plant structure and function is critical for both fundamental science and for improving the performance of crops in field settings. Wind-induced plant motion is important in most agricultural systems, yet the complexity of the field environment means that it remains understudied. Despite the ready availability of image sequences showing plant motion, the cultivation of crop plants in dense field stands makes it difficult to detect features and characterize their general movement traits. Here, we present a robust method for characterizing motion in field-grown wheat plants (Triticum aestivum) from time-ordered sequences of red, green and blue (RGB) images. A series of crops and augmentations was applied to a dataset of 290 collected and annotated images of ear tips to increase variation and resolution when training a convolutional neural network. This approach enables wheat ears to be detected in the field without the need for camera calibration or a fixed imaging position. Videos of wheat plants moving in the wind were also collected and split into their component frames. Ear tips were detected using the trained network, then tracked between frames using a probabilistic tracking algorithm to approximate movement. These data can be used to characterize key movement traits, such as periodicity, and obtain more detailed static plant properties to assess plant structure and function in the field. Automated data extraction may be possible for informing lodging models, breeding programmes and linking movement properties to canopy light distributions and dynamic light fluctuations.
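The detect-track-characterize pipeline above can be sketched end to end on synthetic data: given per-frame detections, link them frame to frame, then estimate the dominant sway frequency with an FFT. The greedy nearest-neighbour association below is a simplified stand-in for the paper's probabilistic tracker, and the 1-D oscillating positions replace real CNN detections; frame rate and frequency are invented.

```python
import numpy as np

# Simplified stand-in for the paper's pipeline: synthetic 1-D ear-tip
# positions oscillating in "wind", greedy nearest-neighbour association
# in place of the probabilistic tracker, then FFT-based periodicity.

def track_nearest(frames):
    """Greedily associate one target across frames by nearest neighbour."""
    track = [frames[0][0]]
    for detections in frames[1:]:
        track.append(min(detections, key=lambda d: abs(d - track[-1])))
    return track

fps, freq = 30.0, 2.0                      # assumed frame rate and sway frequency
t = np.arange(0, 4, 1 / fps)
true_pos = 10 * np.sin(2 * np.pi * freq * t)
# Each frame holds the real ear tip plus one distant spurious detection
frames = [[p, p + 50.0] for p in true_pos]
track = np.array(track_nearest(frames))

spectrum = np.abs(np.fft.rfft(track - track.mean()))
peak_hz = np.fft.rfftfreq(len(track), 1 / fps)[spectrum.argmax()]
print(peak_hz)  # recovers the 2.0 Hz sway frequency
```

Real footage needs the probabilistic association the paper uses, since occlusion and detection dropouts break a greedy nearest-neighbour link; the periodicity estimate, however, works the same way on any recovered trajectory.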